    Privatization-Safe Transactional Memories

    Transactional memory (TM) facilitates the development of concurrent applications by letting the programmer designate certain code blocks as atomic. Programmers using a TM often would like to access the same data both inside and outside transactions, and would prefer their programs to have a strongly atomic semantics, which allows transactions to be viewed as executing atomically with respect to non-transactional accesses. Since guaranteeing such semantics for arbitrary programs is prohibitively expensive, researchers have suggested guaranteeing it only for certain data-race-free (DRF) programs, particularly those that follow the privatization idiom: from some point on, threads agree that a given object can be accessed non-transactionally. In this paper we show that a variant of Transactional DRF (TDRF) by Dalessandro et al. is appropriate for a class of privatization-safe TMs, which allow using privatization idioms. We prove that, if such a TM satisfies a condition we call privatization-safe opacity and a program using the TM is TDRF under strongly atomic semantics, then the program indeed has such semantics. We also present a method for proving privatization-safe opacity that reduces proving this generalization of opacity to proving the usual opacity, and we apply the method to a TM based on two-phase locking and to a privatization-safe version of TL2. Finally, we establish the inherent cost of privatization-safety: we prove that a TM cannot be progressive and have invisible reads if it guarantees strongly atomic semantics for TDRF programs.
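
    As a concrete illustration, here is a minimal Python sketch of the privatization idiom described above, with a single global lock standing in for a TM (a degenerate two-phase-locking TM); the names Node and privatize_and_use are illustrative, not from the paper. Once the flag is flipped inside a transaction, the object is accessed with plain, non-transactional operations.

        import threading

        tm_lock = threading.Lock()  # stand-in for a 2PL-based TM

        class Node:
            def __init__(self, value):
                self.value = value
                self.private = False  # once True, threads agree only the owner touches it

        shared = Node(0)

        def privatize_and_use(node):
            # Transactional phase: atomically mark the node as privatized.
            with tm_lock:
                node.private = True
            # Non-transactional phase: plain accesses, safe under strongly
            # atomic semantics for TDRF programs.
            node.value += 1

        privatize_and_use(shared)
        print(shared.value)  # 1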

    Lower Bounds on the Amortized Time Complexity of Shared Objects

    Remote Memory References at Block Granularity

    Locally Solvable Tasks and the Limitations of Valency Arguments

    An elegant strategy for proving impossibility results in distributed computing was introduced in the celebrated FLP consensus impossibility proof. This strategy is local in nature: at each stage, one configuration of a hypothetical protocol for consensus is considered, together with the future valencies of its possible extensions. This proof strategy has been used in numerous situations related to consensus, leading one to wonder why it has not been used in impossibility results for two other well-known tasks: set agreement and renaming. This paper provides an explanation of why impossibility proofs for these tasks have been of a global nature. It shows that a protocol can always solve such tasks locally, in the following sense: given a configuration and all its future valencies, if a single successor configuration is selected, then the protocol can reveal all decisions in this branch of executions while satisfying the task specification. This result is shown for both set agreement and renaming, implying that there are no local impossibility proofs for these tasks.
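
    To make the notion of valency concrete, the toy Python sketch below labels configurations of an invented configuration graph as 0-valent, 1-valent, or bivalent, the classification at the heart of FLP-style local arguments; the graph and all names are made up for illustration and have no connection to the paper's constructions.

        GRAPH = {               # configuration -> successor configurations
            "C":  ["C0", "C1"],
            "C0": ["D0"],
            "C1": ["D0", "D1"],
            "D0": [], "D1": [],
        }
        DECISION = {"D0": 0, "D1": 1}   # decisions at terminal configurations

        def valency(config):
            # Collect every decision reachable from config by a simple DFS.
            seen, stack, decisions = set(), [config], set()
            while stack:
                c = stack.pop()
                if c in seen:
                    continue
                seen.add(c)
                if c in DECISION:
                    decisions.add(DECISION[c])
                stack.extend(GRAPH[c])
            return decisions    # {v}: v-valent; {0, 1}: bivalent

        print(valency("C"))     # {0, 1} -- bivalent
        print(valency("C0"))    # {0}    -- 0-valent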

    Safety of Deferred Update in Transactional Memory

    Transactional memory allows the user to declare sequences of instructions as speculative transactions that can either commit or abort. If a transaction commits, it appears to be executed sequentially, so that the committed transactions constitute a correct sequential execution. If a transaction aborts, none of its instructions can affect other transactions. The popular criterion of opacity requires that the views of aborted transactions must also be consistent with the global sequential order constituted by the committed ones. This is believed to be important, since inconsistencies observed by an aborted transaction may cause a fatal irrecoverable error or cause the system to waste work in an infinite loop. Intuitively, an opaque implementation must ensure that no intermediate view a transaction obtains before it commits or aborts can be affected by a transaction that has not started committing yet, the so-called deferred-update semantics. In this paper, we capture this intuition formally. We propose a variant of opacity that explicitly requires the sequential order to respect the deferred-update semantics. We show that our criterion is a safety property, i.e., it is prefix- and limit-closed. Unlike opacity, our property also ensures that a serialization of a history implies serializations of its prefixes. Finally, we show that our property is equivalent to opacity if we assume that no two transactions commit identical values on the same variable, and we present a counter-example for scenarios in which the "unique-write" assumption does not hold.
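
    To illustrate deferred-update semantics, here is a minimal Python sketch of a toy redo-log TM (illustrative only, not the paper's model): writes are buffered privately and published atomically at commit, so no transaction's intermediate view can be affected by a transaction that has not started committing.

        import threading

        memory = {}                     # committed shared state
        commit_lock = threading.Lock()  # serializes commit phases

        class Txn:
            def __init__(self):
                self.redo = {}          # deferred updates, private to the transaction

            def read(self, var):
                # See own buffered writes first, otherwise the committed state.
                return self.redo.get(var, memory.get(var, 0))

            def write(self, var, val):
                self.redo[var] = val    # deferred: invisible to other transactions

            def commit(self):
                with commit_lock:       # all updates become visible atomically
                    memory.update(self.redo)

        t = Txn()
        t.write("x", 1)
        print(memory.get("x", 0))       # 0: the update is still deferred
        t.commit()
        print(memory["x"])              # 1: visible only after commit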

    Putting Strong Linearizability in Context: Preserving Hyperproperties in Programs That Use Concurrent Objects

    It has been observed that linearizability, the prevalent consistency condition for implementing concurrent objects, does not preserve some probability distributions. A stronger condition, called strong linearizability, has been proposed, but its study has been somewhat ad hoc. This paper investigates strong linearizability by casting it in the context of observational refinement of objects. We present a strengthening of observational refinement, which generalizes strong linearizability, obtaining several important implications. When a concrete concurrent object refines another, more abstract object, often a sequential one, the correctness of a program employing the concrete object can be verified by considering its behaviors when using the more abstract object. This means that trace properties of a program using the concrete object can be proved by considering the program with the abstract object. This, however, does not hold for hyperproperties, including many security properties and probability distributions of events. We define strong observational refinement, a strengthening of refinement that preserves hyperproperties, and prove that it is equivalent to the existence of forward simulations. We show that strong observational refinement generalizes strong linearizability. This implies that strong linearizability is also equivalent to forward simulation, and it shows that strongly linearizable implementations can be composed both horizontally (i.e., locality) and vertically (i.e., with instantiation). For situations where strongly linearizable implementations do not exist (or are less efficient), we argue that reasoning about hyperproperties of programs can be simplified by strong observational refinement of non-atomic abstract objects.
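
    Since the paper proves strong observational refinement equivalent to the existence of forward simulations, the following toy Python checker may help fix intuitions: it verifies the standard forward-simulation conditions on two invented transition systems (all states, actions, and the relation R are made up for illustration, not taken from the paper).

        # Toy transition systems: (state, action) -> next state.
        CONCRETE = {("c0", "inc"): "c1", ("c1", "flush"): "c2"}
        ABSTRACT = {("a0", "inc"): "a1"}
        INTERNAL = {"flush"}              # internal steps stutter in the abstract
        R = {("c0", "a0"), ("c1", "a1"), ("c2", "a1")}  # candidate simulation

        def is_forward_simulation():
            # Every concrete step from a related pair must be matched (or
            # stuttered) by the abstract system, re-establishing R.
            for (c, act), c2 in CONCRETE.items():
                for (cc, a) in R:
                    if cc != c:
                        continue
                    if act in INTERNAL:
                        ok = (c2, a) in R
                    else:
                        a2 = ABSTRACT.get((a, act))
                        ok = a2 is not None and (c2, a2) in R
                    if not ok:
                        return False
            return True

        print(is_forward_simulation())    # True for this toy pair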

    The Step Complexity of Multidimensional Approximate Agreement

    Approximate agreement allows a set of n processes to obtain outputs that are within a specified distance ε > 0 of one another and within the convex hull of the inputs. When the inputs are real numbers, there is a wait-free shared-memory approximate agreement algorithm [Moran, 1995] whose step complexity is in O(n log(S/ε)), where S, the spread of the inputs, is the maximal distance between inputs. There is another wait-free algorithm [Schenk, 1995] that avoids the dependence on n and achieves O(log(M/ε)) step complexity, where M, the magnitude of the inputs, is the absolute value of the maximal input. This paper considers whether it is possible to obtain an approximate agreement algorithm whose step complexity depends on neither n nor the magnitude of the inputs, which can be much larger than their spread. On the negative side, we prove that Ω(min{(log M)/(log log M), (√(log n))/(log log n)}) is a lower bound on the step complexity of approximate agreement, even when the inputs are real numbers. On the positive side, we prove that a polylogarithmic dependence on n and S/ε can be achieved, by presenting an approximate agreement algorithm with O(log n (log n + log(S/ε))) step complexity. Our algorithm works for multidimensional domains. The step complexity can be further restricted to be in O(min{log n (log n + log(S/ε)), log(M/ε)}) when the inputs are real numbers.
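
    The contraction driving the O(log(S/ε))-type bounds can be seen in a lock-step, sequential toy simulation (a sketch only, not the paper's algorithm): in each round every process posts its value, reads all posted values, and moves halfway toward the midpoint of what it saw, which halves the spread.

        def approx_agree(inputs, eps):
            # Lock-step toy simulation: all processes see all current values.
            vals = list(inputs)
            rounds = 0
            while max(vals) - min(vals) > eps:
                mid = (max(vals) + min(vals)) / 2
                vals = [(v + mid) / 2 for v in vals]  # stay in the convex hull
                rounds += 1
            return vals, rounds

        vals, rounds = approx_agree([0.0, 3.0, 10.0], eps=0.1)
        print(rounds)                  # about log2(S/eps) rounds: here 7
        print(max(vals) - min(vals))   # final spread is at most eps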

    Poly-Logarithmic Adaptive Algorithms Require Unconditional Primitives

    This paper studies the step complexity of adaptive algorithms using primitives stronger than reads and writes. We first consider unconditional primitives, like fetch&inc, which modify the value of the register to which they are applied regardless of its current value. Unconditional primitives admit snapshot algorithms with O(log k) step complexity, where k is the total or the point contention. These algorithms combine a renaming algorithm with a mechanism for propagating values so they can be quickly collected. When only conditional primitives, e.g., compare&swap or LL/SC, are used (in addition to reads and writes), we show that any collect algorithm must perform Ω(k) steps, in an execution whose total contention k is in O(log log n). The lower bound applies to snapshots and renaming, both one-shot and long-lived. Note that there are snapshot algorithms whose step complexity is polylogarithmic in n using only reads and writes, but there are no adaptive algorithms whose step complexity is polylogarithmic in the contention, even when compare&swap and LL/SC are used.
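
    To see why an unconditional primitive helps, consider this sequential Python sketch (illustrative only; announce and collect are invented names, not the paper's algorithm): fetch&inc hands each arriving process a unique dense slot, effectively a one-shot renaming, after which a collect only scans the k occupied slots.

        import itertools

        counter = itertools.count()   # stand-in for a fetch&inc register
        slots = [None] * 64           # shared array indexed by acquired names

        def announce(value):
            name = next(counter)      # fetch&inc: unconditional, never fails
            slots[name] = value       # propagate the value under a dense name
            return name

        def collect():
            out = []
            for s in slots:           # only the first k slots are non-empty
                if s is None:
                    break
                out.append(s)
            return out

        announce("a")
        announce("b")
        print(collect())              # ['a', 'b'] after O(k) reads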

    Nontrivial and Universal Helping for Wait-Free Queues and Stacks

    This paper studies two approaches to formalize helping in wait-free implementations of shared objects. The first approach is based on operation valency, and it allows us to make the important distinction between trivial and nontrivial helping. We show that a wait-free implementation of a queue from Common2 objects (e.g., Test&Set) requires nontrivial helping. In contrast, there is a wait-free implementation of a stack from Common2 objects with only trivial helping. This separation might shed light on the difficulty of implementing a queue from Common2 objects. The other approach formalizes the helping mechanism employed by Herlihy's universal wait-free construction and is based on having an operation by one process restrict the possible linearizations of operations by other processes. We show that objects possessing such universal helping can be used to solve consensus.
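
    The second, "universal helping" approach can be pictured with an announce-array mechanism in the style of Herlihy's universal construction; the sequential Python toy below (not the paper's or Herlihy's actual algorithm) shows how one process's operation can fix the linearization of another process's pending operation.

        N = 3
        announce = [None] * N   # pending operations, one cell per process
        log = []                # agreed sequential history (the linearization)

        def do_op(pid, op):
            announce[pid] = op
            # Help: apply every announced, not-yet-applied operation, own
            # included, in a fixed order, so all helpers agree on the order.
            for p in range(N):
                pending = announce[p]
                if pending is not None:
                    log.append(pending)
                    announce[p] = None

        do_op(0, "enq(1)")
        do_op(2, "enq(2)")
        print(log)              # ['enq(1)', 'enq(2)']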

    Topological Characterization of Task Solvability in General Models of Computation

    The famous asynchronous computability theorem (ACT) relates the existence of an asynchronous wait-free shared-memory protocol for solving a task with the existence of a simplicial map from a subdivision of the simplicial complex representing the inputs to the simplicial complex representing the allowable outputs. The original theorem relies on a correspondence between protocols and simplicial maps in round-structured models of computation that induce a compact topology. This correspondence, however, is far from obvious for computation models that induce a non-compact topology, and indeed previous attempts to extend the ACT have failed. This paper shows that in every non-compact model, protocols solving tasks correspond to simplicial maps that need to be continuous. It first proves a generalized ACT for sub-IIS models, some of which are non-compact, and applies it to the set agreement task. Then it proves that in general models too, protocols are simplicial maps that need to be continuous, hence showing that the topological approach is universal. Finally, it shows that the approach used in the ACT, which equates protocols and simplicial complexes, actually works for every compact model. Our study combines, for the first time, combinatorial and point-set topological aspects of the executions admitted by the computation model.
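
    For orientation, the classical compact-model correspondence that the paper generalizes can be stated, in standard ACT notation, roughly as follows (a sketch of the Herlihy-Shavit formulation, not the paper's generalized theorem):

        \text{A task } (\mathcal{I}, \mathcal{O}, \Delta)
        \text{ is wait-free solvable iff there exist}
        \text{ a chromatic subdivision } \mathrm{Div}(\mathcal{I})
        \text{ and a color-preserving simplicial map }
        \phi \colon \mathrm{Div}(\mathcal{I}) \to \mathcal{O}
        \text{ carried by } \Delta, \text{ i.e., }
        \phi(\mathrm{Div}(\sigma)) \subseteq \Delta(\sigma)
        \quad \text{for every simplex } \sigma \in \mathcal{I}.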